We study the problem of learning from positive and unlabeled (PU) data in the federated setting, where each client labels only a small portion of its dataset due to resource and time constraints. Unlike the setting in traditional PU learning, where the negative class consists of a single class, the negative samples that a client cannot identify in the federated setting may come from multiple classes unknown to that client. Existing PU learning methods can therefore hardly be applied in this situation. To address this problem, we propose a novel framework, Federated learning with Positive and Unlabeled data (FedPU), which minimizes the expected risk over multiple negative classes by leveraging the labeled data of other clients. We theoretically analyze the generalization bound of the proposed FedPU. Empirical experiments show that FedPU achieves better performance than conventional supervised and semi-supervised federated learning methods.
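The risk that PU methods minimize can be illustrated with the classic unbiased PU risk estimator with a non-negativity correction (nnPU-style); this is a minimal single-class sketch for intuition, not the multi-negative-class federated risk that FedPU itself derives:

```python
def pu_risk(scores_pos, scores_unl, prior, loss):
    """nnPU-style risk from classifier scores on positive and unlabeled samples.

    prior is the class prior pi = P(y = +1); loss(s, y) is a margin loss.
    """
    r_p_pos = sum(loss(s, +1) for s in scores_pos) / len(scores_pos)
    r_p_neg = sum(loss(s, -1) for s in scores_pos) / len(scores_pos)
    r_u_neg = sum(loss(s, -1) for s in scores_unl) / len(scores_unl)
    # Unbiased negative risk E_u[l(-)] - pi * E_p[l(-)], clipped at zero
    # to avoid the overfitting caused by a negative empirical risk.
    return prior * r_p_pos + max(0.0, r_u_neg - prior * r_p_neg)

# Simple zero-one loss for demonstration only.
zero_one = lambda s, y: 1.0 if s * y < 0 else 0.0
```

With well-separated scores the risk vanishes; with flipped scores it does not.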
Existing federated classification algorithms typically assume the local annotations at every client cover the same set of classes. In this paper, we aim to lift such an assumption and focus on a more general yet practical non-IID setting where every client can work on non-identical and even disjoint sets of classes (i.e., client-exclusive classes), and the clients have a common goal which is to build a global classification model to identify the union of these classes. Such heterogeneity in client class sets poses a new challenge: how to ensure different clients are operating in the same latent space so as to avoid the drift after aggregation? We observe that the classes can be described in natural languages (i.e., class names) and these names are typically safe to share with all parties. Thus, we formulate the classification problem as a matching process between data representations and class representations and break the classification model into a data encoder and a label encoder. We leverage the natural-language class names as the common ground to anchor the class representations in the label encoder. In each iteration, the label encoder updates the class representations and regulates the data representations through matching. We further use the updated class representations at each round to annotate data samples for locally-unaware classes according to similarity and distill knowledge to local models. Extensive experiments on four real-world datasets show that the proposed method can outperform various classical and state-of-the-art federated learning methods designed for learning with non-IID data.
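The matching formulation above can be sketched in a few lines: classification reduces to finding the class-name embedding (from the label encoder) closest to a data embedding. This is an illustrative reduction under assumed toy embeddings, not the paper's training procedure:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def classify_by_matching(data_emb, class_embs):
    """Match a data representation against class representations.

    class_embs maps a natural-language class name to the embedding
    produced by a (hypothetical) label encoder over that name.
    """
    sims = {name: cosine(data_emb, emb) for name, emb in class_embs.items()}
    return max(sims, key=sims.get)
```

Because every client anchors its classes in the same label-encoder space, clients with disjoint class sets still produce comparable logits.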
Image token removal is an efficient augmentation strategy for reducing the cost of computing image features. However, this efficient augmentation strategy has been found to adversely affect the accuracy of CLIP-based training. We hypothesize that removing a large portion of image tokens may improperly discard the semantic content associated with a given text description, thus constituting an incorrect pairing target in CLIP training. To address this issue, we propose an attentive token removal approach for CLIP training, which retains tokens with a high semantic correlation to the text description. The correlation scores are computed in an online fashion using the EMA version of the visual encoder. Our experiments show that the proposed attentive masking approach performs better than the previous method of random token removal for CLIP training. The approach also makes it efficient to apply multiple augmentation views to the image, as well as introducing instance contrastive learning tasks between these views into the CLIP framework. Compared to other CLIP improvements that combine different pre-training targets such as SLIP and MaskCLIP, our method is not only more effective, but also much more efficient. Specifically, using ViT-B and YFCC-15M dataset, our approach achieves $43.9\%$ top-1 accuracy on ImageNet-1K zero-shot classification, as well as $62.7/42.1$ and $38.0/23.2$ I2T/T2I retrieval accuracy on Flickr30K and MS COCO, which are $+1.1\%$, $+5.5/+0.9$, and $+4.4/+1.3$ higher than the SLIP method, while being $2.30\times$ faster. An efficient version of our approach running $1.16\times$ faster than the plain CLIP model achieves significant gains of $+5.3\%$, $+11.3/+8.0$, and $+9.5/+4.9$ on these benchmarks.
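The core selection step can be sketched as keeping the top-scoring tokens and discarding the rest; here the scores stand in for the text-correlation scores the abstract says are computed online by the EMA visual encoder (the function name and ratio are illustrative):

```python
def attentive_mask(tokens, scores, keep_ratio=0.5):
    """Keep the tokens most correlated with the text description.

    scores[i] is an (assumed) attention/correlation score for tokens[i],
    e.g. from the EMA version of the visual encoder.
    """
    k = max(1, int(len(tokens) * keep_ratio))
    # Indices of the k highest-scoring tokens...
    keep = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)[:k]
    keep.sort()  # ...restored to their original spatial order.
    return [tokens[i] for i in keep]
```

Unlike random removal, semantically relevant tokens survive, so the retained image still matches its paired caption.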
Prompt learning is one of the most effective and trending ways to adapt powerful vision-language foundation models like CLIP to downstream datasets by tuning learnable prompt vectors with very few samples. However, although prompt learning achieves excellent performance over in-domain data, it still faces the major challenge of generalizing to unseen classes and domains. Some existing prompt learning methods tackle this issue by adaptively generating different prompts for different tokens or domains, but they neglect the ability of the learned prompts to generalize to unseen domains. In this paper, we propose a novel prompt learning paradigm, called MetaPrompt, that directly generates a domain-invariant prompt that generalizes to unseen domains. Specifically, a dual-modality prompt tuning network is proposed to generate prompts for inputs from both image and text modalities. More importantly, we propose a meta-learning-based prompt tuning algorithm that explicitly constrains the prompt tuned on a specific domain or class to also achieve good performance on another domain or class. Extensive experiments on 11 datasets for base-to-new generalization and four datasets for domain generalization demonstrate that our method consistently and significantly outperforms existing methods.
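The meta-learning constraint, tune on one domain but require good performance on another, is the same shape as a first-order MAML step. The following is a scalar toy sketch of that idea (the gradient callables and learning rates are assumptions, not the paper's algorithm):

```python
def fo_maml_prompt_step(prompt, grad_a, grad_b, inner_lr=0.1, outer_lr=0.1):
    """One first-order meta step over a prompt vector.

    grad_a / grad_b return loss gradients on a meta-train domain A and a
    held-out meta-test domain B, respectively.
    """
    # Inner step: adapt the prompt on domain A.
    adapted = [p - inner_lr * g for p, g in zip(prompt, grad_a(prompt))]
    # Outer step: move the original prompt so that its A-adapted version
    # also performs well on domain B (first-order approximation).
    return [p - outer_lr * g for p, g in zip(prompt, grad_b(adapted))]

# Toy quadratic losses: L_A(p) = (p - 1)^2, L_B(p) = (p - 2)^2.
grad_a = lambda p: [2.0 * (p[0] - 1.0)]
grad_b = lambda p: [2.0 * (p[0] - 2.0)]
```

Repeating this step drags the prompt toward parameters that transfer across both domains rather than overfitting either one.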
The potential of offline reinforcement learning (RL) is that high-capacity models trained on large, heterogeneous datasets can lead to agents that generalize broadly, analogously to similar advances in vision and NLP. However, recent works argue that offline RL methods encounter unique challenges to scaling up model capacity. Drawing on the lessons from these works, we re-examine previous design choices and find that with appropriate choices, namely ResNets, cross-entropy based distributional backups, and feature normalization, offline Q-learning algorithms exhibit strong performance that scales with model capacity. Using multi-task Atari as a testbed for scaling and generalization, we train a single policy on 40 games with near-human performance using up to 80-million-parameter networks, finding that model performance scales favorably with capacity. In contrast to prior work, we extrapolate beyond dataset performance even when trained entirely on a large (400M transitions) but highly suboptimal dataset (51% human-level performance). Compared to return-conditioned supervised approaches, offline Q-learning scales similarly with model capacity and has better performance, especially when the dataset is suboptimal. Finally, we show that offline Q-learning with a diverse dataset is sufficient to learn powerful representations that facilitate rapid transfer to novel games and fast online learning on new variations of a training game, improving over existing state-of-the-art representation learning approaches.
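The "cross-entropy based distributional backup" refers to the C51-style trick of representing returns as a categorical distribution over fixed atoms and training with cross-entropy rather than an MSE regression. A minimal sketch of the two ingredients, projecting a scalar target onto the support and scoring the prediction, under assumed atom placements:

```python
import math

def project_two_hot(target_value, atoms):
    """Project a scalar return target onto fixed support atoms (two-hot)."""
    probs = [0.0] * len(atoms)
    v = min(max(target_value, atoms[0]), atoms[-1])  # clip to the support
    for i in range(len(atoms) - 1):
        lo, hi = atoms[i], atoms[i + 1]
        if lo <= v <= hi:
            w = (v - lo) / (hi - lo)
            probs[i] += 1.0 - w   # mass on the lower neighbor
            probs[i + 1] += w     # mass on the upper neighbor
            break
    return probs

def cross_entropy(pred_probs, target_probs):
    """Distributional TD loss: CE between predicted and projected target."""
    return -sum(t * math.log(p + 1e-12)
                for t, p in zip(pred_probs, target_probs))
```

Replacing the squared TD error with this classification-style loss is one of the choices the abstract credits for stable scaling.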
Lightweight time-of-flight (ToF) depth sensors are compact, cheap, and low-power, and have been massively deployed on mobile devices for purposes such as autofocus and obstacle detection. However, because their specific measurement is a depth distribution within a certain region rather than a depth value at each pixel, and their resolution is extremely low, they are insufficient for applications requiring high-fidelity depth such as 3D reconstruction. In this paper, we propose DELTAR, a novel method that empowers lightweight ToF sensors with the capability of measuring high-resolution and accurate depth by cooperating with a color image. As the core of DELTAR, a feature extractor customized for depth distributions and an attention-based neural architecture are proposed to fuse the information from the color and ToF domains efficiently. To evaluate our system in real-world scenarios, we design a data collection device and propose a new approach to calibrate the RGB camera and the ToF sensor. Experiments show that our method produces more accurate depth than existing frameworks designed for depth completion and depth super-resolution, and achieves performance on par with a commodity-grade RGB-D sensor. Code and data are available at https://zju3dv.github.io/deltar/.
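The key peculiarity of these sensors, a depth *distribution* per zone rather than a per-pixel value, can be made concrete with a toy helper that collapses a zone's histogram to its mean depth (bin placement and units are illustrative; DELTAR itself feeds the full distribution into a learned feature extractor rather than collapsing it):

```python
def expected_depth(bin_centers, hist):
    """Mean depth of one ToF zone from its measured depth distribution.

    bin_centers are (assumed) depth-bin centers in meters; hist holds the
    photon counts / probabilities the sensor reports for each bin.
    """
    total = sum(hist)
    if total == 0:
        raise ValueError("empty histogram: no return signal in this zone")
    return sum(c * h for c, h in zip(bin_centers, hist)) / total
```

This makes clear why a single zone cannot resolve fine geometry: everything inside the zone is summarized by one distribution.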
We present TwHIN-BERT, a multilingual language model trained on in-domain data from the popular social network Twitter. TwHIN-BERT differs from prior pre-trained language models in that it is trained not only with a text-based self-supervised objective, but also with a social objective based on the rich social engagements within a Twitter Heterogeneous Information Network (TwHIN). Our model is trained on 7 billion tweets covering over 100 distinct languages, providing a valuable representation for short, noisy, user-generated text. We evaluate our model on a variety of multilingual social recommendation and semantic understanding tasks and demonstrate substantial improvements over established pre-trained language models. We will freely open-source TwHIN-BERT and our curated hashtag prediction and social engagement benchmark datasets to the research community.
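A social objective of this kind is commonly cast as a contrastive (InfoNCE-style) loss that pulls together tweets co-engaged by the same users; the sketch below shows that loss shape under an assumed similarity list, and is not TwHIN-BERT's exact objective:

```python
import math

def social_contrastive_loss(sim_pos, sims_all, temp=1.0):
    """InfoNCE over engagement pairs.

    sim_pos is the similarity between a tweet and its socially-engaged
    positive; sims_all contains sim_pos plus negatives from the batch.
    """
    logits = [s / temp for s in sims_all]
    m = max(logits)  # max-shift for numerical stability
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(sim_pos / temp - log_denom)
```

Training with a text objective plus a loss like this injects the network-structure signal the abstract describes into the text encoder.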
In recent years, deep-learning-based models have achieved remarkable performance on video super-resolution (VSR), but most of these models are not applicable to online video applications. These methods consider only the distortion quality and ignore crucial requirements of online applications, such as low latency and low model complexity. In this paper, we focus on online video transmission, in which VSR algorithms are required to generate high-resolution video sequences in real time. To address this challenge, we propose an extremely low-latency VSR algorithm based on a novel kernel knowledge transfer method, called Convolutional Kernel Bypass Grafts (CKBG). First, we design a lightweight network structure that does not require future frames as inputs, saving the extra time cost of caching these frames. Then, the proposed CKBG method enhances this lightweight base model by bypassing the original network with "kernel grafts", which are extra convolutional kernels containing the prior knowledge of external pre-trained image SR models. In the testing phase, we further accelerate the grafted multi-branch network by converting it into a simple single-path structure. Experimental results show that our proposed method can process online video sequences up to 110 FPS with very low model complexity and competitive SR performance.
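The "multi-branch to single-path" conversion works because convolution is linear: two parallel kernels applied to the same input and summed are equivalent to one kernel formed by adding them. A 1D toy sketch of that re-parameterization idea (the actual CKBG grafts operate on full 2D conv layers):

```python
def conv1d(x, w):
    """Valid-mode 1D convolution (correlation) of signal x with kernel w."""
    k = len(w)
    return [sum(x[i + j] * w[j] for j in range(k))
            for i in range(len(x) - k + 1)]

def merged_kernel(w_base, w_graft):
    """Fold an additive bypass branch into the base kernel for test time."""
    return [a + b for a, b in zip(w_base, w_graft)]
```

Because the merge is exact, the single-path network keeps the grafted knowledge while paying the runtime cost of only one branch.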
Recently, the privacy concerns of person re-identification (ReID) have attracted increasing attention, and preserving the privacy of the pedestrian images used by ReID methods has become essential. De-identification (DeID) methods alleviate privacy issues by removing identity-related information from ReID data. However, most existing DeID methods tend to remove all personal-identity-related information and compromise the usability of the de-identified data on the ReID task. In this paper, we aim to develop a technique that can achieve a good trade-off between privacy protection and data usability for person ReID. To achieve this, we propose a novel de-identification method explicitly designed for person ReID, named Person Identity Shift (PIS). PIS removes the absolute identity in a pedestrian image while preserving the identity relationship between image pairs. By exploiting the interpolation property of variational auto-encoders, PIS shifts each pedestrian image from its current identity to another, new identity, resulting in images that still preserve the relative identities. Experimental results show that our method achieves a better trade-off between privacy protection and model performance than existing de-identification methods, and can defend against human and model attacks to ensure data privacy.
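The interpolation property being exploited is that a VAE's latent space is roughly linear, so moving a latent code part-way toward another identity's code yields a plausible in-between identity. A toy sketch of that latent shift (the codes and mixing weight are illustrative; PIS learns how to shift rather than using a fixed blend):

```python
def shift_identity(z_src, z_tgt, alpha=0.5):
    """Linearly move a pedestrian's latent code toward another identity.

    z_src / z_tgt are (assumed) VAE latent codes; alpha controls how far
    the absolute identity is shifted away from the original.
    """
    return [(1.0 - alpha) * a + alpha * b for a, b in zip(z_src, z_tgt)]
```

Applying the same shift consistently across a person's images is what lets pairwise (relative) identity survive while the absolute identity is destroyed.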
Vector graphics (VG) are ubiquitous in our daily life, with extensive applications in engineering, architecture, design, etc. The VG recognition process of most existing methods is to first render the VG into raster graphics (RG) and then conduct recognition based on the RG format. However, this procedure discards the geometric structure and loses the high resolution of the VG. Recently, another category of algorithms has been proposed to recognize directly from the original VG format, but it suffers from topological errors that could have been filtered out by RG rendering. Instead of relying on a single format, a good solution is to utilize the VG and RG formats together to avoid these shortcomings. Moreover, we argue that the VG-to-RG rendering process is essential for effectively combining the VG and RG information: by specifying the rules for transferring VG primitives to RG pixels, the rendering process depicts the interaction and correlation between VG and RG. As a result, we propose RendNet, a unified architecture for recognition in both 2D and 3D scenarios, which considers both the VG and RG representations and exploits their interaction by incorporating the VG-to-RG rasterization process. Experiments show that RendNet can achieve state-of-the-art performance on 2D and 3D object recognition tasks on various VG datasets.
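The "rules for transferring VG primitives to RG pixels" can be made tangible with a naive point-sampled rasterizer for one primitive; this toy sketch only illustrates the VG-to-RG mapping itself, not RendNet's differentiable treatment of it:

```python
def rasterize_segment(p0, p1, size, steps=100):
    """Rasterize one line-segment primitive onto a size x size pixel grid.

    Samples the segment uniformly and lights the nearest pixel per sample;
    steps is an (assumed) sampling density, not an anti-aliasing scheme.
    """
    grid = [[0] * size for _ in range(size)]
    for i in range(steps + 1):
        t = i / steps
        x = round(p0[0] + t * (p1[0] - p0[0]))
        y = round(p0[1] + t * (p1[1] - p0[1]))
        if 0 <= x < size and 0 <= y < size:
            grid[y][x] = 1
    return grid
```

Every lit pixel is traceable back to the primitive that produced it, which is exactly the VG-RG correspondence the architecture exploits.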